%motiva[s89,jmc]		Theory of motivation
\input memo.tex[let,jmc]
\tolerance=700

\title{Notes on Cognitive Theories of Motivation}

	These notes were inspired by a lecture by J. Dancy of
Keele University, given at Stanford in May 1989, entitled
``Cognitive Theories of Motivation''.  Dancy discussed the issue,
raised by philosophers from Hume on, about the structure of
motivation, especially about what division might or might not be
required between beliefs and desires and what their roles might
be.  It seemed to me that some of the issues raised were not
significant, because certain entities could be classified as
beliefs or not according to convenience.  The reasons for this
opinion come from taking an artificial intelligence (AI) approach
to the problem.

	The AI approach considers belief and motivation in terms
of designing a computer program that will behave rationally, i.e.
{\it it will do what it thinks will satisfy its desires}.
  Philosophical questions are important to AI,
because developing an intelligent computer program that reasons
with beliefs represented as sentences in logic requires providing
it with some kind of metaphysics, ontology and epistemology as a
framework in which to put particular facts.  Conversely, thinking
about designing a program leads to some concrete approaches
to the philosophical problems.

	A program that reasons logically
was proposed as a means of getting computer programs with
common sense in (McCarthy 1959).  Further developments are
discussed in a number of papers, but I will mention only
(McCarthy and Hayes 1969), (McCarthy 1986), (Lifschitz 198xx) and
(McCarthy 1989).

	We provide the program with a database of common sense
knowledge expressed in some logical language.  We arrange that
its sensory inputs cause sentences describing its observations
to appear in certain data structures.  The program reasons by
logical inference, including nonmonotonic inference (McCarthy
1986).  This inference causes new sentences to appear.
The sentences that appear in a certain distinguished data
structure are called the beliefs of the program.  The justification
for calling them beliefs will be the way they function.
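
	To make this concrete, here is a minimal sketch (in Python, and
no part of the 1959 proposal or of any actual program; the names
$observe$ and $infer$ and the rule format are invented for the
illustration): sentences are represented as strings, observations are
added to a distinguished set of beliefs, and a toy forward chainer over
propositional rules stands in for the logical and nonmonotonic reasoner.

    beliefs = set()                  # the distinguished data structure

    def observe(sentence):
        # A sensory input causes a sentence describing the
        # observation to appear among the beliefs.
        beliefs.add(sentence)

    def infer(rules):
        # rules is a list of (antecedents, consequent) pairs.
        # Apply them until no new sentences appear; this crude loop
        # stands in for logical and nonmonotonic inference.
        changed = True
        while changed:
            changed = False
            for antecedents, consequent in rules:
                if (all(a in beliefs for a in antecedents)
                        and consequent not in beliefs):
                    beliefs.add(consequent)
                    changed = True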

1. If the program only reasons, it will never perform any
external action, i.e. it won't act on its beliefs.  Therefore,
we need some mechanism for beliefs to result in action.
This can be done in a variety of ways, and these ways
correspond somewhat to different points of view that
have been taken in philosophical discussions of motivation.
However, they seem to me to be so nearly equivalent
that much of the discussion about which is right seems
beside any possible point.

2. Here's the simplest way of providing for action.  Let there
be sentences of the form $decide\_to\_do(x)$.  The
intended meaning is that if the program believes such a sentence,
it has decided to do the action $x$ right away.
We then suppose that the program includes a subroutine
that checks for the appearance of a sentence of the required
form among the beliefs, further requiring that $x$ be of
a form that denotes an immediately performable action.
When this occurs, the program performs the action.
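
	As an illustration only (the names $performable$ and $act$ are
hypothetical, not taken from anything above), the checking subroutine
might look like the following, where an immediately performable action
is taken to be one for which the program has an executable routine.

    performable = {"eat": lambda: print("eating")}   # hypothetical action routines

    def act(beliefs):
        # Scan the beliefs for sentences of the form decide_to_do(x)
        # where x denotes an immediately performable action, and
        # perform each such action.
        for sentence in list(beliefs):
            if sentence.startswith("decide_to_do(") and sentence.endswith(")"):
                x = sentence[len("decide_to_do("):-1]
                if x in performable:
                    performable[x]()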

3. If the original axioms of the system never contained
the predicate 
$decide\_to\_do$,
then no conclusions would contain it either, so again
no external action would be taken.  Now suppose we
have an axiom of the form
%
$$empty\_stomach ⊃ 
decide\_to\_do(eat),$$
%
and that this is the only axiom involving
$decide\_to\_do(x)$.  This system could be
regarded as one without desires, because there
would be a direct reflex-like relation between an empty stomach
and the act of eating with no intermediate desire.
Moreover, a program that answers questions about its database
wouldn't answer that it was hungry, because no such sentence
appears in the database.

	However, if we break the
axiom in two, getting
%
$${\sl empty\_stomach} ∧ \ltup other\_conditions\rtup ⊃ want(eat)$$
%
and
%
$$want(eat) ∧ \ltup other\_conditions\rtup ⊃
decide\_to\_do(eat),$$
%
then we have put in a desire as an intermediary.  This is not
logically fundamental.  However, it may be practically essential
if we want a system in which an empty stomach
doesn't always produce a desire to eat, and a desire to
eat doesn't always lead immediately to eating.  It seems likely
that desires function as such intermediaries in humans as well.
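
	Continuing the sketches above (the rule lists and the particular
$\ltup other\_conditions\rtup$, such as $food\_at\_hand$ and
$not\_recently\_eaten$, are hypothetical illustrations), the reflex
version and the version with an intermediate desire differ only in
whether one rule or two connect the empty stomach to the act, and the
extra conditions give each step a chance to be blocked.

    # Reflex version: one rule, no intermediate desire.
    reflex_rules = [(["empty_stomach"], "decide_to_do(eat)")]

    # Desire version: want(eat) appears as an intermediary, and each
    # step can be blocked by its own (hypothetical) other conditions.
    desire_rules = [
        (["empty_stomach", "not_recently_eaten"], "want(eat)"),
        (["want(eat)", "food_at_hand"], "decide_to_do(eat)"),
    ]

    observe("empty_stomach")
    observe("not_recently_eaten")
    infer(desire_rules)     # want(eat) appears; decide_to_do(eat) does not,
                            # since food_at_hand has not been observed
    act(beliefs)            # therefore no eating happens yet

\noindent In the desire version a question-answering program could
report $want(eat)$ even though no action has yet been taken, which is
just the distinction drawn above.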

	In this formalization, desires function rather independently
of one another.  Some philosophical points of view propose that
desires are subordinate to a general desire for welfare.  This
can be handled by 

\smallskip\centerline{Copyright \copyright\ 1989\ by John McCarthy}
\smallskip\noindent{This draft of MOTIVA[S89,JMC]\ TEXed on \jmcdate\ at \theTime}
\vfill\eject\end